perm filename PUTNAM.1[LET,JMC] blob
sn#701685 filedate 1983-02-14 generic text, type C, neo UTF8
.require "let.pub[let,jmc]" source
∂CSL Professor Hilary Putnam↓Department of Philosophy
↓Harvard University↓Cambridge, Mass. 02138∞
Dear Hilary:
In connection with a computer mail discussion of various
philosophy topics connected with artificial intelligence (AI), I have been
reading philosophy papers including your "Two Philosophical Perspectives".
Artificial intelligence requires various philosophical formalisms
including a theory of truth, and the only kinds of theories of truth I can
see how to use are correspondence theories. I have to confess that
I don't follow the reasoning by which you find difficulties with
correspondence theories, but I wonder whether you can tell me if
those difficulties will arise in the kind of "approximate" theory of truth
required for robots.
I have three uses for theories of truth:
1. The robot itself can use the predicate %2true%1. The main
applications are in using or generating quantified assertions about
what is true. Example: Everything Tarski says is true when repeated
by Putnam. Tarski has said and Putnam repeated "Snow is white".
Therefore, "Snow is white" is true. Therefore, snow is white.
Of course, in this example %2confirmed%1 might be substituted
for %2true%1. This seems inappropriate, however, when we
want to use the law of the excluded middle, as in
%B∀p.true(p) ∨ true(not p)%1.
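For illustration, use 1 can be sketched in a few lines of modern code; representing sentences as quoted strings and the knowledge base as sets is of course a drastic simplification of what a robot would need:

```python
# A toy sketch of use 1: a robot reasoning with a truth predicate over
# quoted sentences. The sets below are a hypothetical knowledge base
# recording who asserted what.

said_by_tarski = {"Snow is white"}
repeated_by_putnam = {"Snow is white"}

def true(sentence: str) -> bool:
    """Everything Tarski says is true when repeated by Putnam."""
    return sentence in said_by_tarski and sentence in repeated_by_putnam

# From true("Snow is white") the robot "disquotes" and asserts the
# sentence itself, i.e. adds it to its beliefs.
beliefs = set()
if true("Snow is white"):
    beliefs.add("Snow is white")

print(beliefs)  # {'Snow is white'}
```

The step from %2true("Snow is white")%1 to believing "Snow is white" is exactly the disquotation the quantified reasoning requires.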
2. Evaluating and improving a robot involves reasoning about
when the sentences in the belief section of its memory will be true.
Again this reasoning involves quantification. Example: Suppose that if
the robot
has certain opportunities to experiment with the blocks on the table,
it will acquire a complete theory. All true statements in a certain
language will follow from those it puts in its database. Moreover,
if it is asked about any sentence in this language, it will be able
to tell whether the sentence is true. It may happen that if the language is
extended in a certain way, there will be true sentences the robot
cannot decide.
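A toy illustration of such completeness, and of its failure when the language is extended, can be given with sentences written as Boolean expressions over atoms (an artificial representation chosen only for brevity):

```python
# A sketch of use 2: a belief database that decides every sentence of
# one language but leaves sentences of an extended language undecided.
from itertools import product

def decides(db, atoms, sentence):
    """True iff all truth assignments satisfying db agree on sentence.
    db and sentence are Python boolean expressions over the atom names."""
    vals = [eval(sentence, dict(zip(atoms, v)))
            for v in product([False, True], repeat=len(atoms))
            if all(eval(s, dict(zip(atoms, v))) for s in db)]
    return all(vals) or not any(vals)

db = ["p", "p == q"]                      # the robot's beliefs
print(decides(db, ["p", "q"], "q"))       # True: q is settled
print(decides(db, ["p", "q", "r"], "r"))  # False: the new atom r
                                          # is left undecided
```

Extending the language with the atom %2r%1 introduces true sentences the database cannot decide, just as in the example above.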
3. More speculatively, we would like to develop mathematical
theories of epistemology. These concern the relation between the
epistemological strategy of a program connected to an environment
and what it will succeed in discovering. Example: Conway's game of
Life is a two-dimensional cellular automaton. Some years ago the
M.I.T. AI Lab hackers discovered that arrays of Life cells can form
universal, self-reproducing computers. Such arrays, perhaps involving
tens or hundreds of thousands of cells, can be programmed as physicists
of the Life World. What if any intellectual strategies would lead
them to discover by an appropriate combination of experiment and theory
that the fundamental physics of their world was the Life cellular
automaton? Would some philosophies of science, e.g. extreme operationalism,
if embedded in their programs, prevent them from discovering the
physics of their world?
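The Life World itself is easy to simulate. A minimal sketch of one generation of Conway's rule (a cell is born with exactly 3 live neighbors, and survives with 2 or 3):

```python
from collections import Counter

def life_step(live):
    """Advance one generation of Life; `live` is a set of (x, y) cells."""
    # Count, for every cell, how many live neighbors it has.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Birth on 3 neighbors; survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# The "blinker" oscillates with period 2.
blinker = {(0, 0), (1, 0), (2, 0)}
print(life_step(blinker))  # {(1, -1), (1, 0), (1, 1)}
```

The Life physicists of the question would, of course, have to discover this rule from inside an array governed by it.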
The simplest form of such an epistemological
theory involves a fixed language with a fixed interpretation as
assertions about the physics of the world. We can then reason directly
about the truth of the sentences the Life World physicists generate.
Less simply, data structures may arise in their memories that we
wish to interpret as making assertions about their World.
In principle there could be many different equally good interpretations,
but by analogy with the fact that
cryptograms almost always have unique solutions, this is extremely unlikely
except where there are isomorphisms.
E. F. Moore's "Gedanken Experiments with Sequential Machines"
in %2Automata Studies%1 begins a study of what strategies work in
what kinds of world.
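Moore's setting can be illustrated by a toy experiment (the two machines below are my own example, not Moore's): search for an input sequence whose outputs distinguish two candidate finite-state machines, treated as black boxes.

```python
# A gedanken experiment on sequential machines: find the shortest input
# sequence whose outputs tell two candidate Mealy machines apart.
from itertools import product

def run(machine, inputs, state):
    """machine maps (state, input) -> (next_state, output)."""
    out = []
    for i in inputs:
        state, o = machine[(state, i)]
        out.append(o)
    return tuple(out)

# Two candidate machines over inputs {0, 1}; they differ only in the
# output on transition (t, 0).
A = {('s', 0): ('s', 0), ('s', 1): ('t', 0),
     ('t', 0): ('s', 1), ('t', 1): ('t', 0)}
B = {('s', 0): ('s', 0), ('s', 1): ('t', 0),
     ('t', 0): ('s', 0), ('t', 1): ('t', 0)}

# Try experiments of increasing length until the outputs differ.
for n in range(1, 4):
    for seq in product((0, 1), repeat=n):
        if run(A, seq, 's') != run(B, seq, 's'):
            print(seq)  # prints (1, 0)
            break
    else:
        continue
    break
```

No single-input experiment separates these machines; the experimenter must first drive them into the state where they disagree.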
For each of the three uses of %2true%1, it seems to me that
a correspondence theory is appropriate. Indeed I don't know how
to go about constructing any other kind. It seems that at least in the
cases where we are willing to take the "responsibility" for saying
what the world is like, correspondence theories don't encounter the
difficulties you mention in your article.
What is your present
opinion?
There is also the technical problem of getting around the
paradoxes of truth and knowledge pointed out by Montague (and
I suppose originally by Tarski). The way out is presumably to
weaken the axioms in some way analogous to the way ZF weakens
Frege's system.
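The difficulty can be sketched as follows (my reconstruction of the standard argument):

```latex
% Suppose the truth predicate satisfies the full T-schema in a language
% able to quote its own sentences:
\[ \mathrm{true}(\ulcorner \varphi \urcorner) \leftrightarrow \varphi
   \quad \text{for every sentence } \varphi. \]
% Diagonalization yields a "liar" sentence $L$ with
\[ L \leftrightarrow \neg\,\mathrm{true}(\ulcorner L \urcorner). \]
% Instantiating the schema at $L$ then gives
\[ \mathrm{true}(\ulcorner L \urcorner) \leftrightarrow
   \neg\,\mathrm{true}(\ulcorner L \urcorner), \]
% a contradiction. Hence the axioms for true must be restricted, much as
% ZF restricts Frege's unrestricted comprehension.
```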
Please be frank about whether you find these questions
interesting. If you do, I can give many examples in the robot
area of more-or-less practical reasoning involving the predicate
%2true%1. I also have examples of axioms concerning the truthfulness
of the output of programs and other sources of information the
robot may use.
.reg
.<< One problem with coherence theories is that once you decide
. that some approximation to the current scientific view of the
. world is coherent with its big bang, evolution of stars and
. galaxies, formation of the sun, solar systems, planets and earth,
. evolution of life, intelligence, human society and M.I.T. AI Lab,
. you get a correspondence theory of truth along with it. In the
. world you have admitted as part of the coherent explanation of
. your observations, there are correspondences between sentences
. as abstract objects and facts about the world. >>